Supplementary Materials for "DropCov: A Simple yet Effective Method for Improving Deep Architectures" Qilong Wang
Our proposed DropCov can be flexibly integrated with existing deep architectures (e.g., CNNs). (Footnote: Qinghua Hu is the corresponding author and is with the Engineering Research Center of City Intelligence and Digital Governance, Ministry of Education of the People's Republic of China.) Experiments (e.g., with VGG-VD on three small-scale fine-grained datasets) show that 0.5 is the best choice of […]. As listed in Table S2, a single LT module brings only a small gain for plain GCP. Compared to B-CNN + LT (79.62% training accuracy), plain GCP […] GCP + LT, while B-CNN + LT achieves a significant improvement over B-CNN and plain GCP. On the contrary, samples involving less redundant information (e.g., scene) have large […]. These phenomena are consistent with our finding. Is second-order information helpful for large-scale visual recognition?
- Asia > China > Tianjin Province > Tianjin (0.05)
- Asia > China > Liaoning Province > Dalian (0.04)
DropCov: A Simple yet Effective Method for Improving Deep Architectures
Previous works show that global covariance pooling (GCP) has great potential to improve deep architectures, especially on visual recognition tasks, where post-normalization of GCP plays a very important role in final performance. Although several post-normalization strategies have been studied, these methods pay closer attention to the effect of normalization on covariance representations than to the whole GCP network, and their effectiveness requires further understanding. Meanwhile, existing effective post-normalization strategies (e.g., matrix power normalization) usually suffer from high computational complexity (e.g., $O(d^{3})$ for $d$-dimensional inputs). To handle the above issues, this work first analyzes the effect of post-normalization from the perspective of training GCP networks. Particularly, we show for the first time that \textit{effective post-normalization can make a good trade-off between representation decorrelation and information preservation for GCP, which are crucial to alleviate over-fitting and to increase the representation ability of deep GCP networks, respectively}. Based on this finding, we can improve existing post-normalization methods with some small modifications, providing further support for our observation. Furthermore, this finding encourages us to propose a novel pre-normalization method for GCP (namely DropCov), which applies an adaptive channel dropout to features right before GCP, aiming to reach a trade-off between representation decorrelation and information preservation in a more efficient way. Our DropCov has only a linear complexity of $O(d)$ and is free at inference. Extensive experiments on various benchmarks (i.e., ImageNet-1K, ImageNet-C, ImageNet-A, Stylized-ImageNet, and iNat2017) show that our DropCov is superior to its counterparts in terms of efficiency and effectiveness, and provides a simple yet effective way to improve the performance of deep architectures, including both deep convolutional neural networks (CNNs) and vision transformers (ViTs).
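The abstract above describes DropCov as channel dropout applied to features right before covariance pooling. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: it uses a fixed drop rate via `nn.Dropout2d` in place of the paper's adaptive, input-dependent dropout, and the names `DropCovSketch` and `drop_rate` are our own.

```python
import torch
import torch.nn as nn

class DropCovSketch(nn.Module):
    """Sketch of pre-normalization GCP: drop whole channels before pooling."""
    def __init__(self, drop_rate=0.5):
        super().__init__()
        self.dropout = nn.Dropout2d(p=drop_rate)  # zeroes entire channels

    def forward(self, x):                 # x: (B, d, H, W) conv features
        x = self.dropout(x)               # decorrelation step (training only)
        b, d, h, w = x.shape
        x = x.reshape(b, d, h * w)
        x = x - x.mean(dim=2, keepdim=True)
        cov = x @ x.transpose(1, 2) / (h * w - 1)   # (B, d, d) covariance
        idx = torch.triu_indices(d, d)    # keep the upper triangle
        return cov[:, idx[0], idx[1]]
```

Because `nn.Dropout2d` is identity in eval mode, this sketch is consistent with the claim that the method is free at inference.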
Consciousness in Artificial Intelligence? A Framework for Classifying Objections and Constraints
Campero, Andres, Shiller, Derek, Aru, Jaan, Simon, Jonathan
We develop a taxonomical framework for classifying challenges to the possibility of consciousness in digital artificial intelligence systems. This framework allows us to identify the level of granularity at which a given challenge is intended (the levels we propose correspond to Marr's levels) and to disambiguate its degree of force: is it a challenge to computational functionalism that leaves the possibility of digital consciousness open (degree 1), a practical challenge to digital consciousness that suggests improbability without claiming impossibility (degree 2), or an argument claiming that digital consciousness is strictly impossible (degree 3)? We apply this framework to 14 prominent examples from the scientific and philosophical literature. Our aim is not to take a side in the debate, but to provide structure and a tool for disambiguating between challenges to computational functionalism and challenges to digital consciousness, as well as between different ways of parsing such challenges.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Africa > South Africa > Western Cape > Indian Ocean (0.04)
- (5 more...)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Issues (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.46)
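As a reading aid for the framework in the abstract above, here is a toy Python encoding of its two axes. The enum and field names are hypothetical, and we assume the three Marr levels intended are the standard computational, algorithmic, and implementational ones.

```python
from dataclasses import dataclass
from enum import Enum

class MarrLevel(Enum):
    COMPUTATIONAL = "computational"        # what is computed and why
    ALGORITHMIC = "algorithmic"            # which representations/processes
    IMPLEMENTATIONAL = "implementational"  # physical substrate

class Degree(Enum):
    FUNCTIONALISM_CHALLENGE = 1  # challenges computational functionalism,
                                 # leaves digital consciousness open
    IMPROBABILITY = 2            # practical: improbable, not impossible
    IMPOSSIBILITY = 3            # digital consciousness strictly impossible

@dataclass
class Challenge:
    name: str
    level: MarrLevel
    degree: Degree

# Illustrative only, not the authors' verdict on any actual objection:
example = Challenge("substrate dependence",
                    MarrLevel.IMPLEMENTATIONAL, Degree.IMPOSSIBILITY)
```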
All in One and One for All: A Simple yet Effective Method towards Cross-domain Graph Pretraining
Zhao, Haihong, Chen, Aochuan, Sun, Xiangguo, Cheng, Hong, Li, Jia
Large Language Models (LLMs) have revolutionized the fields of computer vision (CV) and natural language processing (NLP). One of the most notable advancements of LLMs is that a single model is trained on vast and diverse datasets spanning multiple domains -- a paradigm we term `All in One'. This methodology empowers LLMs with super generalization capabilities, facilitating an encompassing comprehension of varied data distributions. Leveraging these capabilities, a single LLM demonstrates remarkable versatility across a variety of domains -- a paradigm we term `One for All'. However, applying this idea to the graph field remains a formidable challenge, with cross-domain pretraining often resulting in negative transfer. This issue is particularly important in few-shot learning scenarios, where the paucity of training data necessitates the incorporation of external knowledge sources. In response to this challenge, we propose a novel approach called Graph COordinators for PrEtraining (GCOPE), which harnesses the underlying commonalities across diverse graph datasets to enhance few-shot learning. Our methodology involves a unification framework that amalgamates disparate graph datasets during the pretraining phase to distill and transfer meaningful knowledge to target tasks. Extensive experiments across multiple graph datasets demonstrate the superior efficacy of our approach. By successfully leveraging the synergistic potential of multiple graph datasets for pretraining, our work stands as a pioneering contribution to the realm of graph foundation models.
- North America > United States > Texas (0.05)
- North America > United States > Wisconsin (0.05)
- North America > United States > District of Columbia > Washington (0.05)
- (4 more...)
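The abstract does not specify how the "graph coordinators" amalgamate datasets, so the following PyTorch sketch is only one plausible reading, not GCOPE's actual code: project each dataset's node features into a shared space and attach a learnable per-dataset coordinator node. All names (`CoordinatorUnifier`, `feat_dims`, `shared_dim`) are hypothetical.

```python
import torch
import torch.nn as nn

class CoordinatorUnifier(nn.Module):
    """Speculative unification step for cross-domain graph pretraining."""
    def __init__(self, feat_dims, shared_dim=128):
        super().__init__()
        # one projection per source dataset, mapping into a shared space
        self.projections = nn.ModuleList(
            [nn.Linear(d, shared_dim) for d in feat_dims])
        # one learnable coordinator embedding per source dataset
        self.coordinators = nn.Parameter(
            torch.randn(len(feat_dims), shared_dim))

    def forward(self, x, dataset_id):
        h = self.projections[dataset_id](x)              # (N, shared_dim)
        coord = self.coordinators[dataset_id].unsqueeze(0)
        # append the coordinator as an extra virtual node
        return torch.cat([h, coord], dim=0)              # (N + 1, shared_dim)
```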
Sequential Integrated Gradients: a simple but effective method for explaining language models
Several explanation methods such as Integrated Gradients (IG) can be characterised as path-based methods, as they rely on a straight line between the data and an uninformative baseline. However, when applied to language models, these methods produce a path for each word of a sentence simultaneously, which can create sentences from interpolated words that either have no clear meaning or have a meaning significantly different from the original sentence. In order to keep the meaning of these sentences as close as possible to the original one, we propose Sequential Integrated Gradients (SIG), which computes the importance of each word in a sentence by keeping every other word fixed, only creating interpolations between the baseline and the word of interest. Moreover, inspired by the training procedure of several language models, we also propose to replace the baseline token "pad" with the trained token "mask". While being a simple improvement over the original IG method, we show on various models and datasets that SIG proves to be a very effective method for explaining language models.
- Europe > Italy > Marche > Ancona Province > Ancona (0.04)
- Europe > Sweden > Östergötland County > Linköping (0.04)
- Europe > Finland > Southwest Finland > Turku (0.04)
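A minimal sketch of the SIG procedure described above, assuming a model whose forward pass takes word embeddings and returns a scalar class score; `forward_fn` and `mask_embed` are hypothetical names, and a real implementation would likely batch the interpolation steps rather than loop.

```python
import torch

def sequential_integrated_gradients(embeds, forward_fn, mask_embed, n_steps=50):
    """For each word i, interpolate only embedding i between the 'mask'
    baseline and its true value, keeping every other word fixed."""
    base = embeds.detach()
    seq_len = base.shape[0]
    attributions = torch.zeros(seq_len)
    for i in range(seq_len):
        grads = []
        for alpha in torch.linspace(0.0, 1.0, n_steps):
            x = base.clone()
            x[i] = mask_embed + alpha * (base[i] - mask_embed)
            x.requires_grad_(True)
            score = forward_fn(x)                 # scalar class score
            grads.append(torch.autograd.grad(score, x)[0][i])
        avg_grad = torch.stack(grads).mean(dim=0)
        # attribution_i = (word_i - baseline) . averaged gradient
        attributions[i] = ((base[i] - mask_embed) * avg_grad).sum()
    return attributions
```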
An Overview of Boosting Methods: CatBoost, XGBoost, AdaBoost, LightBoost, Histogram-Based Gradient…
Ensemble learning aims to train a model as successfully as possible by combining multiple learning algorithms. In one ensemble method, bagging, more than one model is applied in parallel to different subsamples of the same dataset. Boosting, another method frequently used in practice, builds models sequentially instead of in parallel: a weak algorithm trains the model, the training data is then re-weighted according to the training results so that it becomes easier to learn, and this modified problem is passed to the next algorithm, which learns more easily than the first. This article covers different boosting methods that interpret this sequential idea from different angles; a small runnable comparison follows.
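As a concrete illustration of the boosting variants surveyed above, here is a small comparison using scikit-learn's built-in implementations (AdaBoost, classic gradient boosting, and histogram-based gradient boosting). The synthetic dataset and hyperparameters are arbitrary choices for the example, not recommendations.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier,
                              GradientBoostingClassifier,
                              HistGradientBoostingClassifier)
from sklearn.model_selection import train_test_split

# Synthetic binary classification task (arbitrary)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for clf in (AdaBoostClassifier(n_estimators=100, random_state=0),
            GradientBoostingClassifier(n_estimators=100, random_state=0),
            HistGradientBoostingClassifier(max_iter=100, random_state=0)):
    clf.fit(X_tr, y_tr)                       # sequential stage-wise fitting
    print(type(clf).__name__, round(clf.score(X_te, y_te), 3))
```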
AI Marker-based Large-scale AI Literature Mining
Yao, Rujing, Ye, Yingchun, Zhang, Ji, Li, Shuxiao, Wu, Ou
The knowledge contained in academic literature is worth mining. Inspired by the idea of molecular marker tracing in the field of biochemistry, three named entities, namely methods, datasets, and metrics, are used as AI markers for AI literature. These entities can be used to trace the research process described in the bodies of papers, which opens up new perspectives for seeking and mining more valuable academic information. Firstly, an entity extraction model is used in this study to extract AI markers from large-scale AI literature. Secondly, original papers are traced for AI markers, and statistical and propagation analyses are performed based on the tracing results. Finally, the co-occurrences of AI markers are used to achieve clustering. The evolution within method clusters and the influencing relationships amongst different research scene clusters are explored. The above-mentioned mining based on AI markers yields many meaningful discoveries. For example, the propagation of effective methods to datasets is increasing rapidly over time; effective methods proposed by China in recent years have increasing influence on other countries, whilst France shows the opposite trend. Saliency detection, a classic computer vision research scene, is the least likely to be affected by other research scenes.
- Europe > France (0.25)
- North America > United States (0.14)
- Europe > United Kingdom (0.05)
- (10 more...)
- Media (0.46)
- Leisure & Entertainment (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Text Processing (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Data Science > Data Mining (0.93)
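To make the co-occurrence step of the AI-marker pipeline above concrete, here is a toy sketch; the marker names and the `papers` list are invented for illustration. It counts how often pairs of markers appear in the same paper, which is the raw material for the clustering the authors describe.

```python
from collections import Counter
from itertools import combinations

# Invented toy data: the set of AI markers extracted from each paper.
papers = [
    {"ResNet", "ImageNet", "top-1 accuracy"},
    {"ResNet", "COCO", "mAP"},
    {"Faster R-CNN", "COCO", "mAP"},
]

cooc = Counter()
for markers in papers:
    for a, b in combinations(sorted(markers), 2):
        cooc[(a, b)] += 1            # pair co-occurred in one paper

# The most frequent pairs would feed the clustering step.
for pair, n in cooc.most_common(5):
    print(pair, n)
```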